Big Tech Hasn’t Fixed AI’s Misinformation Problem—Yet

Giansiracusa is a professor of mathematics and data science at Bentley University and author of How Algorithms Create and Prevent Fake News. Marcus is a professor emeritus at NYU, founder and CEO of Geometric Intelligence, and author of five books, including Guitar Zero and Rebooting AI.

The scrappy underdog AI firm OpenAI has stirred the sleeping tech giants with its generative AI products, most recently and most prominently the conversational chatbot ChatGPT. Microsoft invested $10 billion in a partnership with OpenAI in an attempt to leapfrog its younger big tech competitors by weaving AI into many of its products; Google internally declared a “code red” and is cutting red tape to put out AI products more quickly, including a just-announced direct competitor to ChatGPT; meanwhile, Mark Zuckerberg has declared his intent to make Meta a “leader in generative AI,” clearly a reaction to the attention OpenAI is garnering. The products these companies are suddenly striving for sound similar, but who will be the winner? Although much of the discussion has centered on the size of the AI models and how much data they are trained on, another factor may matter a lot, too: the degree to which the contenders build trustworthy systems that don’t unduly harm society and further destabilize democracy.

OpenAI’s earlier text generation product GPT-3 grabbed a lot of attention but never saw the widespread consumer adoption that ChatGPT has attained. The biggest change from GPT-3 to ChatGPT isn’t model size (the two are believed to be comparable in magnitude); it is that, after a GPT-3-like training process of siphoning up statistical correlations from internet text, ChatGPT further underwent an extensive human feedback process to improve its output. A large part of this refinement was filtering out offensive and harmful output. ChatGPT is both more useful than GPT-3 and more palatable to mainstream consumers, and its record-setting success reflects this.

Compare this with another recent generative text system, Meta’s Galactica, essentially a variant of GPT-3 fine-tuned to summarize and write scientific papers. Lacking ChatGPT’s elaborate toxicity filters (and preceding it by a couple of weeks), it lasted three days before Meta retracted it amid intense public criticism. The issue was how readily it could be leveraged to produce dangerous scientific and medical disinformation at scale, a point driven home by a user who effortlessly generated a legitimate-looking medical paper on the health benefits of eating broken glass.


Weaponized disinformation is hardly a niche or hypothetical concern. In the immediate aftermath of ChatGPT, both the top AI conference and the most popular coding Q&A site banned AI-generated content, the latter explaining that it had been flooded with more plausible-looking but incorrect AI-generated answers than its moderators could keep up with. The Eurasia Group, a prominent geopolitical consultancy, recently put out a report on the top anticipated risks of 2023 and ranked generative AI technology as the third biggest, behind only an increasingly aggressive China and Russia, for its ability to “erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets.” They are far from alone in sounding this AI alarm.

Rather than recognizing and admitting this risk of generative text, Meta’s outspoken Chief AI Scientist and VP Yann LeCun arrogantly dismissed it. He also blamed users for Galactica’s disastrous rollout.

Microsoft similarly tried to blame users for abusing Tay, an early chatbot it released and then rapidly withdrew in 2016, but Microsoft’s awareness of how powerful online technology can harm society has grown tremendously since then. More recently, the contrast couldn’t be greater between the statements from executives at Microsoft and Meta on the risks of AI. Earlier this month, Microsoft’s Vice Chair and President Brad Smith wrote a company blog post discussing the need to develop AI responsibly and Microsoft’s efforts to do so since 2017. (That this effort started right after the racist chatbot drew unwanted attention to the company is perhaps not a coincidence.) In the post, Smith warns, “Some unfortunately will use this technology to exploit the flaws in human nature, deliberately target people with false information, undermine democracy and explore new ways to advance the pursuit of evil.” He also asserts that “we need to have wide-ranging and deep conversations and commit to joint action to define the guardrails for the future.” This is the right attitude to have with respect to AI, and it has been Meta’s mistake, both morally and commercially, to choose otherwise.

Meta’s attention to the ethical dimensions of the technology it enlists is spotty at best, from blindness to gaping data privacy holes to reliance on underpaid contract workers in horrific work environments. Its emerging attitude toward AI, as frequently conveyed through the company’s vocal AI Chief, is no less worrisome. For example, LeCun recently claimed the company’s AI filters catch nearly all of the hate speech on the platform, even though evidence revealed by Facebook whistleblower Frances Haugen suggests the situation is quite the opposite. Commenting on the chatbot Meta released last August, which failed to attract users, LeCun admitted that it “was panned by people who tried it.” But he took the wrong lesson when he said “it was boring because it was made safe.” By most accounts, the safety features on Meta’s chatbot were crude, cumbersome, and ineffective. As ChatGPT has demonstrated, there is consumer demand for, and investor interest in, safe AI. (Government regulation will certainly be necessary too.)

It will be interesting to see where Google falls on the spectrum between the responsible AI discussed in the recent blog post by Microsoft’s president and the more reckless, shambolic approach Meta seems to be taking. In the past, Google’s CEO Sundar Pichai has expressed concern over AI and the need to proceed with caution. A small but indicative difference between his mindset and Zuckerberg’s when it comes to the unintended societal consequences of powerful technology emerged in the aftermath of Trump’s surprising 2016 election victory. Asked whether fake news could have played a decisive role, Pichai answered: “Sure. You know, I think fake news as a whole could be an issue.” Zuckerberg, on the other hand, dismissed the idea as “pretty crazy.”

If ChatGPT succeeded where Meta’s attempts at generative AI failed, it is mostly because OpenAI cared about keeping its chatbot from spewing toxic content, whereas Meta did far less to reduce such harms. But what OpenAI did is just a start; the problem of misinformation is by no means solved. ChatGPT’s toxicity guardrails are easily evaded by those bent on using it for evil, and, as we saw earlier this week, all the new search engines continue to hallucinate. For now, Microsoft’s updated Bing search engine has been treated with kid gloves by the media, while Google’s Bard has been subject to ridicule. But once we get past the opening-day jitters, what will really count is whether any of the big players can build artificial intelligence that we can genuinely trust.

